2 research outputs found

    DATA-DRIVEN TECHNIQUES FOR DIAGNOSING BEARING DEFECTS IN INDUCTION MOTORS

    Induction motors are frequently used in many automated systems as a major driving force, and their reliable performance is therefore of paramount concern. Induction motors are subject to different types of faults, and early fault detection can reduce maintenance costs and prevent unscheduled downtime. Motor faults are generally related to three components: the stator, the rotor, and/or the bearings. This study focuses on the diagnosis of bearing faults, which are the major cause of failures in induction motors. Data-driven fault diagnosis systems usually include a classification model supported by an efficient pre-processing unit. Classifiers that aim to diagnose multiple bearing defects (i.e., ball, inner-race, and outer-race defects of different diameters) require well-processed data. The pre-processing stage plays a vital role in extracting informative features from the vibration signal, reducing the dimensionality of the features, and selecting the best features from the feature pool. Once the vibration signal is properly analyzed and a suitable feature subset is created, fault classifiers can be trained. However, the classification task can be difficult if the training dataset is not balanced. Induction motors usually operate in a healthy condition rather than a faulty one, so the monitored vibration samples from the normal state of the system are expected to outnumber those from the faulty states. This challenge is also considered in this work, so the classification model must deal with the class imbalance problem.
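    The feature-extraction step described above can be sketched as follows. This is a minimal illustration, assuming common time-domain vibration features (RMS, kurtosis, crest factor, skewness) and a fixed segment length; the actual feature pool and segmentation used in the study are not specified here.

    ```python
    import numpy as np

    def time_domain_features(segment):
        """Compute illustrative time-domain features for one vibration segment.
        The feature set (RMS, kurtosis, crest factor, skewness) is an
        assumption for demonstration, not the study's exact feature pool."""
        rms = np.sqrt(np.mean(segment ** 2))
        mean = np.mean(segment)
        std = np.std(segment)
        kurtosis = np.mean((segment - mean) ** 4) / std ** 4  # 4th moment ratio
        skewness = np.mean((segment - mean) ** 3) / std ** 3
        crest = np.max(np.abs(segment)) / rms                 # peak-to-RMS ratio
        return np.array([rms, kurtosis, skewness, crest])

    # Usage: split a long vibration record into fixed-length segments and
    # build one feature vector per segment (synthetic data for illustration).
    rng = np.random.default_rng(0)
    record = rng.normal(size=4096)
    segments = record.reshape(-1, 1024)   # 4 segments of 1024 samples
    features = np.vstack([time_domain_features(s) for s in segments])
    print(features.shape)                 # (4, 4): segments x features
    ```

    A dimensionality-reduction or feature-selection step (as the abstract describes) would then operate on the resulting feature matrix before the classifier is trained.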

    Imputation-Based Ensemble Techniques for Class Imbalance Learning

    Correct classification of rare samples is a vital data mining task and of paramount importance in many research domains. This article mainly focuses on the development of novel class-imbalance learning techniques that integrate oversampling methods with bagging and boosting ensembles. Two novel oversampling strategies, based on single and multiple imputation methods, are proposed. The proposed techniques aim to create useful synthetic minority-class samples, similar to the original minority-class samples, by estimating missing values that are deliberately induced in the minority-class samples. The re-balanced datasets are then used to train the base learners of the ensemble algorithms. In addition, the proposed techniques are compared with commonly used class-imbalance learning methods in terms of three performance metrics (AUC, F-measure, and G-mean) over several synthetic binary-class datasets. The empirical results show that the proposed multiple-imputation-based oversampling combined with bagging significantly outperforms the other competitors.
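    The core oversampling idea, inducing missing values into copies of minority samples and then imputing them, can be sketched as below. This is a simplified single-imputation variant using column means; the function name, parameters, and imputation rule are illustrative assumptions, not the paper's exact method.

    ```python
    import numpy as np

    def single_imputation_oversample(X_min, n_new, missing_rate=0.3, seed=0):
        """Sketch of imputation-based oversampling: copy minority samples,
        induce missing values at random, then fill them with the minority
        class column means, yielding synthetic samples near the originals.
        All names and the mean-imputation rule are illustrative."""
        rng = np.random.default_rng(seed)
        col_means = X_min.mean(axis=0)
        idx = rng.integers(0, len(X_min), size=n_new)
        synth = X_min[idx].astype(float)             # copies of minority rows
        mask = rng.random(synth.shape) < missing_rate  # induce missingness
        synth[mask] = np.take(col_means, np.where(mask)[1])  # impute by mean
        return synth

    # Usage: grow a toy minority class of 5 samples to 20 before training
    # each base learner of a bagging ensemble on the re-balanced data.
    X_min = np.arange(15, dtype=float).reshape(5, 3)
    X_aug = np.vstack([X_min, single_imputation_oversample(X_min, 15)])
    print(X_aug.shape)  # (20, 3)
    ```

    A multiple-imputation variant, as the abstract suggests, would repeat the impute step with several estimators and pool the results; each bagging base learner would then see a differently re-balanced bootstrap sample.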